Mask R-CNN
Automatic Pith Detection in Tree Cross-Section Images Using Deep Learning
Liao, Tzu-I, Fakhry, Mahmoud, Varghese, Jibin Yesudas
Pith detection in tree cross-sections is essential for forestry and wood quality analysis but remains a manual, error-prone task. This study evaluates deep learning models -- YOLOv9, U-Net, Swin Transformer, DeepLabV3, and Mask R-CNN -- to automate the process efficiently. A dataset of 582 labeled images was dynamically augmented to improve generalization. Swin Transformer achieved the highest accuracy (0.94), excelling at fine segmentation. YOLOv9 performed well for bounding-box detection but struggled with boundary precision. U-Net was effective for structured patterns, while DeepLabV3 captured multi-scale features with slight boundary imprecision. Mask R-CNN initially underperformed due to overlapping detections, but applying Non-Maximum Suppression (NMS) improved its IoU from 0.45 to 0.80. Generalizability was then tested on an oak dataset of 11 images from Oregon State University's Tree Ring Lab. For exploratory analysis, a further dataset of 64 labeled tree cross-sections was used to retrain the worst-performing model, to test whether this improved its generalization to the unseen oak dataset. Key challenges included tensor mismatches and boundary inconsistencies, addressed through hyperparameter tuning and augmentation. Our results highlight deep learning's potential for pith detection in tree cross-sections, with model choice depending on dataset characteristics and application needs.
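The NMS post-processing credited above with raising Mask R-CNN's IoU from 0.45 to 0.80 suppresses lower-scoring detections that overlap a kept detection. A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) format and an illustrative overlap threshold (neither is specified in the abstract):

```python
def iou(a, b):
    # intersection-over-union of two boxes (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # greedily keep the highest-scoring box, then drop all
    # remaining boxes that overlap it above iou_thresh
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep
```

For example, two near-coincident detections of the same pith collapse to the single higher-scoring one, while a distant detection survives.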
- North America > United States > Oregon > Benton County > Corvallis (0.41)
- South America > Uruguay (0.04)
- Asia > Taiwan (0.04)
- North America > Canada (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- North America > United States > Colorado (0.04)
- Asia (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Sensing and Signal Processing > Image Processing (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
- Oceania > Australia > South Australia > Adelaide (0.04)
- Asia > China (0.04)
Image Segmentation and Classification of E-waste for Training Robots for Waste Segregation
Industry partners provided a problem statement involving the classification of electronic waste with machine learning models, to be used by pick-and-place robots for waste segregation. Common electronic waste items, such as a mouse and a charger, were disassembled by unsoldering and photographed to create a custom dataset. The state-of-the-art YOLOv11 model was trained and achieved 70 mAP in real time. A Mask R-CNN model was also trained and achieved 41 mAP. The model can be integrated with pick-and-place robots to perform e-waste segregation. Electronic waste (e-waste) is one of the fastest-growing solid waste streams globally [2].
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- Asia > India (0.05)
Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot
Ceola, Federico, Maiettini, Elisa, Pasquale, Giulia, Meanti, Giacomo, Rosasco, Lorenzo, Natale, Lorenzo
The visual system of a robot has different requirements depending on the application: it may require high accuracy or reliability, be constrained by limited resources, or need fast adaptation to dynamically changing environments. In this work, we focus on the instance segmentation task and provide a comprehensive study of techniques for adapting an object segmentation model in the presence of novel objects or different domains. We propose a pipeline for fast instance segmentation learning designed for robotic applications where data arrive as a stream. It is based on a hybrid method that combines a pre-trained CNN for feature extraction with fast-to-train kernel-based classifiers. We also propose a training protocol that shortens training time by performing feature extraction during data acquisition. We benchmark the proposed pipeline on two robotics datasets and deploy it on a real robot, the iCub humanoid. To this end, we adapt our method to an incremental setting in which novel objects are learned online by the robot. The code to reproduce the experiments is publicly available on GitHub.
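The hybrid idea above is to freeze an expensive feature extractor and train only a cheap kernel classifier on its outputs. A minimal sketch of the fast-to-train half, using RBF kernel ridge regression on one-hot labels as a hypothetical stand-in for the paper's kernel classifiers (the kernel choice, regularization, and feature vectors here are illustrative, not taken from the work):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian kernel from pairwise squared distances
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelRidgeClassifier:
    # kernel ridge regression on one-hot targets; training is a single
    # linear solve, which is what makes this kind of classifier fast to
    # (re)train when novel objects arrive in a stream
    def __init__(self, lam=1e-3, gamma=0.5):
        self.lam, self.gamma = lam, gamma

    def fit(self, X, y):
        # X: (n, d) precomputed CNN features; y: (n,) integer labels
        self.X = X
        Y = np.eye(int(y.max()) + 1)[y]                      # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), Y)
        return self

    def predict(self, X):
        scores = rbf_kernel(X, self.X, self.gamma) @ self.alpha
        return scores.argmax(axis=1)
```

Because fitting is one regularized linear solve rather than gradient descent over a deep network, adding a new object class only requires re-solving this small system over the stored features.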
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Italy > Liguria > Genoa (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Europe > Italy > Umbria > Perugia Province > Perugia (0.04)
- North America > United States > Colorado (0.04)
- Asia (0.04)
- Asia > Taiwan (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)